Udacity Self-Driving Car Engineer Nanodegree

Learning Journal

Self-Driving Car Engineer
begin date: 01/11/2021 end date: 31/12/2021

1. Welcome to the Self Driving Car Engineer Nanodegree

Lesson 1: An Introduction to your nano-degree program

Welcome to Udacity
The Udacity Experience
How to Succeed
Goals
- Define learning path
- Define goals
- Goal could be to learn more
- Keep goals in mind
- Set short and long term goals
- Create incremental plan
Accountability
- Build in accountability to meet the goal
- Make learning a habit
- Remember your larger goal
- Build rewards for your learning habit
Learning Strategies
- Break code logic into pseudocode
- Be patient with yourself
- Remember your goals
Technical advice
- Coding requires debugging
- Seek solutions
- Look at successful examples
- Build code incrementally
Summary
- Set goals and accountability measures
- Break down goals into smaller goals
- Focus on smaller goals
- Work through exercises
- Pseudocode first, code last

Lesson 2: Getting help

What it Takes

Completing a Udacity program takes perseverance and dedication, but the rewards outweigh the challenges. Throughout your program, you will develop and demonstrate specific skills that will serve you for a lifetime. Congratulations on taking the first step towards developing the skills you need to power your career through tech education!

The videos, text lessons, and quizzes you encounter in the classroom are optional but recommended. The project at the end of this course will test your ability to apply the skills and strategies you have learned in the classroom to real-world problems. It will also provide tangible outputs you can use to demonstrate your skills for current and future employers.

The project is designed to be challenging. Many students initially struggle, but with a little grit, they are able to learn from their mistakes and build their skills. Data from nearly 100,000 Udacity graduates show that commitment and persistence are the highest predictors of whether or not a student will graduate.

At some point, nearly every student will get stuck on a new concept or skill, and doubt may set in. Don’t panic. Don’t quit. Be patient, and work through the problem. Remember that you are not alone and the problem that you are encountering is likely one that many others have experienced as well. Whether you are stuck or simply looking for encouragement, you’ll find Udacity Mentors and students there to help.

Getting Help

Lesson 3: Getting to Know Waymo

2. Computer Vision

Lesson 1: Introduction to Deep Learning for Computer Vision

Instructor: Thomas Hossler
Prerequisites
Lesson Outline
Key Stakeholders

Self-driving cars or autonomous vehicles will have a huge impact on our society once the technology is deployed at scale. The following articles highlight the economic impact as well as the broader consequences of the technology.

  • economic impact

  • broader consequences

  • insurance companies

  • city planners

  • lawmakers

  • daily drivers

  • Introduction to Deep Learning and Computer Vision
    Why Computer Vision Is Important for Self-Driving Cars
    History of Deep Learning
    TensorFlow
    Register for the Waymo Open Dataset

    Tools, Environment & Dependencies
    Project : Object Detection in an Urban Environment

    Lesson 2: The Machine Learning Workflow

    1. Introduction to the Machine Learning Workflow

    In this lesson, we will learn how to think about machine learning problems. Machine Learning (ML) is not only about cool math and modeling; it also involves framing the problem, identifying customer needs, and setting long-term goals. This lesson is organized as follows:

    We will practice framing a machine learning problem by identifying key stakeholders and choosing the right metrics.
    Because machine learning is all about data, we will discuss the different challenges related to data.
    We will also address how to organize your dataset when solving an ML problem, to ensure that the model you create performs well on new data.
    Finally, we will see how to leverage different tools to pinpoint the limitations of your model.
    Throughout this lesson, we will practice several times with the German Traffic Sign Recognition Benchmark (GTSRB). A downsampled version of the dataset has been downloaded to your workspace.

    2. Big Picture

    In the following videos and lessons, we are going to take a deeper dive into each component of the workflow.

    detects sharks from an overview perspective

    3. Framing the Problem

    Unless you are competing in a machine learning competition, a model's performance is rarely your only concern. For example, in a self-driving car system, the model's inference time (the time needed to produce a prediction) is also an important factor. A model that can process 5 images per second is better than a model that can only process one image per second, even if the second model performs better. In that case, inference time is also a metric for selecting our model.

    Understanding your data pipeline is very important because it will drive your model development. In some cases, acquiring new data is relatively easy, but annotating it (for example by associating class names) can be expensive. In such cases, you may want to create a model that requires less data or that can handle unlabeled data.

    4. Framing the Problem Quizzes
    5. Identifying the Key Stakeholders

    As a Machine Learning Engineer, you will rarely be the end user of your product. Therefore, you need to pinpoint the different stakeholders of the problem you are trying to solve. Why? Because this will drive your model development.

    6. Identifying the Key Stakeholders Quizzes
    7. Choosing Metrics

    8. Choosing Metrics Quiz

    IoU: Congratulations! This definition of IoU can be used for semantic segmentation problems, where we try to classify each pixel in an image. For object detection, however, we will see a more efficient definition!

    9. Using A Desktop Workspace
    10. Exercise: Choosing Metrics

    IoU: Intersection over Union
    How to compute IoU: https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/
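Following the approach in the linked article, IoU can be sketched in a few lines of plain Python (the [x1, y1, x2, y2] box format is an assumed convention):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Boxes are [x1, y1, x2, y2] with (x1, y1) the top-left and
    (x2, y2) the bottom-right corner.
    """
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Empty intersection if the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    # Union = sum of the areas minus the double-counted intersection.
    return inter / (area_a + area_b - inter)
```

Identical boxes give an IoU of 1, disjoint boxes give 0, and everything in between measures the overlap quality of a predicted box against a ground-truth box.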

    11. Solution: Choosing Metrics
    12. Data Acquisition and Visualization

    In many cases, you will need to gather your own data but in some, you will be able to leverage Open Source datasets, such as the Google Open Image Dataset. However, keep in mind the end goal and where your algorithm will be deployed or used.

    Because of something called domain gap, an algorithm trained on a specific dataset may not perform well on another. For example, a pedestrian detection algorithm trained on data gathered with a specific camera may not be able to accurately detect pedestrians on images captured with another camera.

    13. Data Acquisition and Visualization Quizzes
    14. Exercise: Data Acquisition and Visualization
    15. Solution: Data Acquisition and Visualization
    16. Exploratory Data Analysis

    Machine Learning algorithms may be very sensitive to domain shift. This domain shift can happen at different levels:

    weather / light conditions: for example, an algorithm trained only on sunny images is not going to perform well when shown rainy or night-time data.
    sensor: a sensor change or different processing methods will create a domain shift.
    environment: an algorithm trained on low intensity traffic data will not perform well on high intensity traffic data for example.
    An extensive Exploratory Data Analysis (EDA) is critical to the success of any ML project. Why? Because during this phase, the ML engineer gets acquainted with the dataset and discovers any potential challenges with the data. The EDA is such an important part of the project that ML engineers spend a few days on it alone. For a vision problem, it requires looking at 1,000s of images in your dataset!

    17. Exploratory Data Analysis Quizzes

    18. Cross Validation

    The goal of our ML algorithm is to be deployed in a production environment. For example, the object detection algorithm you will create in the final project could be deployed directly in a self-driving car. But before we can deploy such algorithms, we need to be sure that they will perform well in any environment they will encounter. In other words, we want to evaluate the generalization ability of our model.

    We are going to introduce three new concepts:

    When a model overfits, it loses its ability to generalize. This usually happens when the chosen model is too complex and starts extracting noise instead of meaningful features. For example, a car detection model overfits when it starts extracting brand-specific features of the cars in the dataset (such as car logos) instead of broader features (wheels, shape, etc.).

    Overfitting raises a very important question: how do we know whether our model generalizes properly? Indeed, when only a single dataset is available, it is challenging to know whether we have created a model that overfits or one that simply performs well.

    From now on, we will use the term training data for the data used to teach and create the algorithm, and test data for any new, unseen data.

    The bias-variance tradeoff illustrates one of the most important challenges in Machine Learning. How do we create a model that performs well while keeping its ability to generalize to new, unseen data? The performance of our algorithm on such data is quantified by the test error. The test error can be decomposed further into the bias and the variance.

    The bias quantifies the quality of the fit of our model on the training data. A low bias means that our model has a very low error rate on the training dataset.

    The variance quantifies the sensitivity of the model to the training data. In other words, if we were to replace our training dataset with another one, how much would the training error rate change? A low variance means that our model is not sensitive to the training data and generalizes well.

    Validation Sets & Cross Validation

    Cross validation is a set of techniques to evaluate the capacity of our model to generalize and alleviate the overfitting challenges. In this course, we will leverage the validation set approach, where we split the available data into two splits:

    - a training set, used to create our algorithm (usually 80-90% of the available data)
    - a validation set, used to evaluate it (10-20% of the available data)

    In further videos, we will see how we can leverage this approach to alleviate the overfitting problem.

    Other cross validation methods exist, such as LOO (Leave One Out) or k-fold cross validation but they are not suited to Deep Learning algorithms. You can read more about these other two techniques here.
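The validation set approach can be sketched in a few lines (a minimal illustration; the 80/20 split and the fixed seed are arbitrary example choices):

```python
import random

def train_val_split(samples, val_fraction=0.2, seed=42):
    """Shuffle the samples and split them into a training and a validation set."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    shuffled = samples[:]       # copy so the input list is left untouched
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

# 100 dummy sample ids -> 80 for training, 20 for validation.
train, val = train_val_split(list(range(100)), val_fraction=0.2)
```

Shuffling before splitting matters: if the data is ordered (e.g. by class or by recording time), a contiguous split would put a biased subset into the validation set.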

    19. Cross Validation Quizzes
    20. TFRecord

    TF Records are TensorFlow’s custom data format. Even though they are not technically required to train a model with TensorFlow, they can be very useful. For some pre-existing TensorFlow APIs, such as the object detection API that we will use for the final project, a TF Record format is required to train models.

    Alt text

    Waymo Open Dataset vs. TensorFlow Object Detection API
    In the final project of this course, you will use data from the Waymo Open Dataset with the TensorFlow Object Detection API to perform object detection on camera images. While both use .tfrecord files, there is a difference in the structure of each. As such, the upcoming exercise will have you take a .tfrecord from the Waymo Open Dataset and convert it into a new .tfrecord usable by the TensorFlow Object Detection API.

    While also linked in the exercise itself, you’ll need a few resources to be able to do so more easily.

    First, this repository gives some additional information about the Waymo Open Dataset itself (note that the Waymo link to https://waymo.com/open/data/ therein should now be https://waymo.com/open/data/perception, as Waymo has added a "motion" component to the previously perception-only dataset).
    Secondly, this tutorial for the TF Object Detection API for converting from .xml to .tfrecord also shows certain steps that will apply in our case as well.
    This exercise will require some research on your own of the above documentation (and potentially other documentation) to reach a converted file; however, if you get stuck, it is perfectly reasonable to skip ahead to the solution video for some assistance.

    Additional Resources

    The above documentation will be useful to refer to as you work on the upcoming exercise.

    21. Exercise: TFRecord
    22. Solution: TFRecord
    23. Model Selection

    ML Engineers get very excited about creating new models. However, before diving into this step of the ML workflow, one must set realistic expectations, by setting up baselines.

    A lower bound baseline gives you an idea of the minimum expected performance. If you are getting metrics below this baseline, a red flag should be raised, and you should be concerned that something is wrong with your training pipeline. For example, for a classification problem, the random guess baseline is a good lower bound. Given C classes, the accuracy of your algorithm should be higher than 1/C.

    An upper bound baseline gives you a sense of the maximum expected performance. If a client comes to you and asks for an algorithm that classifies images correctly 100% of the time, you can safely let them know that it won’t happen. Human performance is a good upper bound baseline. For a classification problem, you should try to manually classify 100s of images to get an idea of what level of performance your algorithm could reach.
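The random-guess lower bound mentioned above is easy to verify empirically (a toy sketch; the balanced 4-class label set is invented for illustration):

```python
import random

def random_guess_accuracy(labels, num_classes, seed=0):
    """Accuracy of a classifier that picks a class uniformly at random."""
    rng = random.Random(seed)
    guesses = [rng.randrange(num_classes) for _ in labels]
    correct = sum(g == y for g, y in zip(guesses, labels))
    return correct / len(labels)

# With C = 4 balanced classes, random guessing converges to 1 / C = 0.25.
labels = [i % 4 for i in range(100_000)]
acc = random_guess_accuracy(labels, num_classes=4)
```

Any trained model scoring at or below this value is a sign that the training pipeline, not the model architecture, should be inspected first.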

    Model selection is a dynamic part of the ML workflow. It requires many iterations. Unless you have some prior knowledge of the task, it is recommended to start with simple models and iterate on complexity. Keep in mind that the validation set should remain the same during this phase!

    24. Model Selection Quizzes
    25. Error Analysis

    Validation set metrics are a good indicator of the global performance of the model, but we often need a finer understanding. A metric like accuracy won't tell you if a certain class of objects is always misclassified, for example. For these reasons, one must perform an in-depth error analysis before iterating on the model.

    Sorting predictions based on the metric or loss values is always a useful way to identify error patterns.

    26. Error Analysis Quizzes

    Congratulations! If the training dataset is missing examples of data that occurs in the validation set, you should try to increase its size (or use augmentations, as we will see in later lessons).

    27. Waymo: The Factory Model
    28. Lesson Conclusion

    The Machine Learning workflow is organized as follows: frame the problem and identify stakeholders, choose metrics, acquire and explore the data, split it into training and validation sets, select a model, and analyze errors.

    Lesson 3: Sensor and Camera Calibration

    Learn how to calibrate your camera to remove distortions for improved perception.

    1. Intro to the Camera Sensor

    This lesson will be organized as follows:

    2. Big Picture

    Cameras are optical instruments capturing the light intensity on a digital image. The most important characteristics of a camera for a ML engineer are the following:

    3. Distortion Correction

    Distortion
    Image distortion occurs when a camera looks at 3D objects in the real world and transforms them into a 2D image; this transformation isn’t perfect. Distortion actually changes what the shape and size of these 3D objects appear to be. So, the first step in analyzing camera images, is to undo this distortion so that you can get correct and useful information out of them.

    5. Distortion Correction Quiz

    Why is it important to correct for image distortion?

    6. Pinhole Camera Model

    Types of Distortion

    Real cameras use curved lenses to form an image, and light rays often bend a little too much or too little at the edges of these lenses. This creates an effect that distorts the edges of images, so that lines or objects appear more or less curved than they actually are. This is called radial distortion, and it’s the most common type of distortion.

    Another type of distortion, is tangential distortion. This occurs when a camera’s lens is not aligned perfectly parallel to the imaging plane, where the camera film or sensor is. This makes an image look tilted so that some objects appear farther away or closer than they actually are.

    Distortion Coefficients and Correction
    There are three coefficients needed to correct for radial distortion: k1, k2, and k3. To correct the appearance of radially distorted points in an image, one can use a correction formula.

    In the following equations, (x, y) is a point in a distorted image. To undistort these points, OpenCV calculates r, which is the known distance between a point in an undistorted (corrected) image and the center of the image distortion, which is often the center of that image. This center point is sometimes referred to as the distortion center. These points are pictured below.

    Note: The distortion coefficient k3 is required to accurately reflect major radial distortion (like in wide angle lenses). However, for minor radial distortion, which most regular camera lenses have, k3 has a value close to or equal to zero and is negligible. So, in OpenCV, you can choose to ignore this coefficient; this is why it appears at the end of the distortion values array: [k1, k2, p1, p2, k3]. In this course, we will use it in all calibration calculations so that our calculations apply to a wider variety of lenses (wider, like wide angle, haha) and can correct for both minor and major radial distortion.

    Points in a distorted and undistorted (corrected) image. The point (x, y) is a single point in a distorted image and (x_corrected, y_corrected) is where that point will appear in the undistorted (corrected) image.



    Radial distortion correction.

    There are two more coefficients that account for tangential distortion: p1 and p2, and this distortion can be corrected using a different correction formula.



    Tangential distortion correction.
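The two correction formulas referenced above were figures in the original notes; reconstructed here as the standard OpenCV distortion model (radial first, then tangential), consistent with the coefficient array [k1, k2, p1, p2, k3]:

```latex
% Radial distortion correction
x_{\text{distorted}} = x_{\text{ideal}} \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right)
y_{\text{distorted}} = y_{\text{ideal}} \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right)

% Tangential distortion correction
x_{\text{corrected}} = x + \left[2 p_1 x y + p_2 \left(r^2 + 2 x^2\right)\right]
y_{\text{corrected}} = y + \left[p_1 \left(r^2 + 2 y^2\right) + 2 p_2 x y\right]
```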

    6. Pinhole Camera Model Quizzes
    7. Camera Calibration

    Examples of Useful Code:

    # Converting an image, imported by cv2 or the glob API, to grayscale:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Finding chessboard corners (for an 8x6 board):
    ret, corners = cv2.findChessboardCorners(gray, (8, 6), None)

    # Drawing detected corners on an image:
    img = cv2.drawChessboardCorners(img, (8, 6), corners, ret)

    # Camera calibration, given object points, image points, and the shape of the grayscale image:
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

    # Undistorting a test image:
    dst = cv2.undistort(img, mtx, dist, None, mtx)

    A note on image shape
    The shape of the image passed to the calibrateCamera function is simply the height and width of the image. One way to retrieve these values is from the grayscale image shape array, gray.shape[::-1]. This returns the image width and height in pixel values, e.g. (1280, 960).

    Another way to retrieve the image shape is to get it directly from the color image, by taking the first two values of the color image shape array with img.shape[1::-1]. This code snippet asks for only the first two values in the shape array, and reverses them. Note that in our example we are using a grayscale image, which has only 2 dimensions (a color image has three: height, width, and depth), so this is not necessary here.

    It is important to use either the entire grayscale image shape, or only the first two values of the color image shape. This is because the full shape of a color image includes a third value, the number of color channels, in addition to the height and width. For example, the shape array of a color image might be (960, 1280, 3), where (960, 1280) are the pixel height and width of the image and (3) is the number of color channels. If you try to pass all three values into the calibrateCamera function, you will get an error.
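The two shape-retrieval options above can be checked directly with NumPy (the zero-filled arrays are hypothetical stand-ins for real camera frames):

```python
import numpy as np

# Stand-ins for real images loaded by cv2.
gray = np.zeros((960, 1280), dtype=np.uint8)       # grayscale: (height, width)
color = np.zeros((960, 1280, 3), dtype=np.uint8)   # color: (height, width, channels)

# Both expressions yield (width, height), the form calibrateCamera expects.
shape_from_gray = gray.shape[::-1]      # reverse the 2-value shape
shape_from_color = color.shape[1::-1]   # take the first two values, reversed
```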

    8. Exercise: Camera Calibration
    9. Solution: Camera Calibration
    10. Image Manipulation

    Grayscale images are single channel images that only contain information about the intensity of the light.

    Color models are mathematical models used to describe digital images. The Red, Green, Blue (RGB) color model describes images using three channels. Each pixel in this model is described by a triplet of values, usually 8-bit integers. This is the most common color model used in ML. HLS/HSV are also very popular color models. They take a different approach than the RGB model by encoding the color with a single value, the hue. The other two values characterize the darkness / colorfulness of the image.
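The color models above can be explored with Python's standard colorsys module (a toy, single-pixel illustration; real pipelines operate on whole image arrays, and the grayscale weights below are the common ITU-R BT.601 luminance coefficients):

```python
import colorsys

# A pure red pixel in normalized RGB (values in [0, 1]).
r, g, b = 1.0, 0.0, 0.0

# RGB -> HSV: the color itself is now encoded by a single value, the hue.
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# RGB -> grayscale intensity via a weighted sum of the three channels.
gray = 0.299 * r + 0.587 * g + 0.114 * b
```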

    11. Image Manipulation Quizzes
    12. Exercise: Image Manipulation
    13. Solution: Image Manipulation
    14. Pixel Level Transformation

    Pillow is a Python imaging library. Using Pillow, we can easily load images, convert them from one color model to another, and perform diverse pixel-level transformations, such as color thresholding. Color thresholding consists of isolating a range of colors from a digital image. It can be done using different color models, but the HSV/HLS color models are particularly well suited for this task.

    You can use the workspace below to try out the same code from the video as well as the second video further down the page.
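Color thresholding in HSV can be sketched without any imaging library (a toy illustration using the standard colorsys module; the hue range and sample pixels are invented for the example):

```python
import colorsys

def in_hue_range(rgb, hue_min, hue_max):
    """True if the pixel's hue falls inside [hue_min, hue_max].

    rgb is an (r, g, b) triplet of 8-bit values; hues are normalized to [0, 1].
    """
    h, _s, _v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return hue_min <= h <= hue_max

# Isolate yellowish pixels (hue roughly between 0.10 and 0.20).
pixels = [(255, 0, 0), (255, 220, 40), (0, 0, 255)]   # red, yellow, blue
mask = [in_hue_range(p, 0.10, 0.20) for p in pixels]
```

Applying this per pixel over an image yields a binary mask, which is exactly what a lane-line or traffic-sign color filter produces.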

    Image Enhancement and Filtering
    Images in ML datasets reflect real-life conditions and therefore may need to be improved upon or modified. Pillow provides a very useful module, ImageEnhance, to perform pixel-level transformations on images, such as contrast changes. Moreover, ML engineers often want to add some noise to the images in the dataset to reduce overfitting. ImageEnhance provides simple ways of doing so.

    15. Pixel Level Transformation Quizzes
    16. Geometric Transformation

    In addition to pixel level transformation, Pillow also provides ways to perform geometric transformations, such as rotation, resizing or translation. In particular, we can use Pillow to perform affine transformation (a geometric transformation where lines are preserved) using a transformation matrix.
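An affine transformation applies a 2x3 matrix [A | t] to each point: p' = A p + t, so straight lines stay straight. A minimal NumPy sketch (the rotation-plus-translation matrix is just an illustrative choice, not tied to Pillow's API):

```python
import numpy as np

def apply_affine(points, matrix):
    """Apply a 2x3 affine matrix [A | t] to an (N, 2) array of 2D points."""
    a = matrix[:, :2]   # 2x2 linear part (rotation / scale / shear)
    t = matrix[:, 2]    # translation vector
    return points @ a.T + t

# 90-degree counter-clockwise rotation followed by a (5, 0) translation.
m = np.array([[0.0, -1.0, 5.0],
              [1.0,  0.0, 0.0]])
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
out = apply_affine(pts, m)
```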

    17. Geometric Transformation Quizzes

    18. Exercise: Geometric Transformation
    19. Solution: Geometric Transformation
    20. Lesson Conclusion

    In this lesson, we learned about:

    - The camera sensor and its distortion effect. A camera captures light on a digital sensor, but the raw images are distorted.

    Lesson 4: From Linear Regression to Feedforward Neural Networks

    Lesson 5: Image Classification with CNNs

    Lesson 6: Object Detection in Images

    Project: Object Detection in an Urban Environment

    Use the Waymo dataset to detect objects in an urban environment.

    3. Sensor Fusion

    Lesson 1: Introduction to Sensor Fusion and Perception

    1. Introduction

    Welcome to this lesson on LiDAR technology. Without LiDAR sensors, fully self-driving cars would most probably not become a reality.

    In the first chapter, we will start with the general role of LiDAR in autonomous driving. You will learn about the various levels of autonomous driving, get a brief introduction to camera, LiDAR and radar, and we will discuss the criteria you need to consider for sensor selection.

    In the second chapter, we will look at the LiDAR sensors used in Waymo vehicles. We will briefly look into the most important technical specifications, discuss the structure of the Waymo Open Dataset and I will introduce you to the course starter code for many of the exercises.

    In the third chapter, you will focus on LiDAR technology. You will learn about the LiDAR working principle, the LiDAR equation and the meaning of multiple signal returns. Also, you will get an overview of currently available LiDAR types and the major differences between them.

    We will also look at the concept of range images used in the Waymo dataset. You will learn how range images are structured and how you can transform them into 3D point clouds.

    2. The Role of Lidar in Autonomous Driving

    As an alternative to scanning LiDAR, there are also non-scanning sensors, known as Flash LiDAR. The term "flash" refers to the fact that the field of view is illuminated entirely by the laser source, much like a camera with a flash, while an array of photodetectors simultaneously receives the reflected laser pulses.

    Flash LiDAR sensors have no moving parts, which is why they are resistant to vibration and come in much smaller packages than scanning LiDAR sensors. The downside of this sensor type, compared to roof-mounted LiDAR, is a limited range and a relatively narrow field of view. In autonomous vehicles, both scanning and non-scanning LiDAR are used to observe different areas around the vehicle: a roof-mounted scanning LiDAR generates a 360-degree view out to roughly 80-100 m, while non-scanning LiDAR sensors (usually mounted at the four corners) observe the vehicle's immediate vicinity in the blind zone of the top-mounted sensor.

    Other sensor types
    Besides cameras, radar, and LiDAR, other sensor types are available, such as ultrasonic sensors (widely used for parking applications since the 1990s) or stereo cameras (sometimes also called pseudo-LiDAR). These sensors, however, are beyond the scope of this course. From a sensor fusion perspective, it makes the most sense to combine a camera sensor with LiDAR, radar, or both, in order to obtain a reliable and accurate reconstruction of the vehicle's surroundings.

    3. LiDAR vs. Radar vs. Cameras
    4. Lidar Data in the Waymo Dataset

    As you can see from the following image, Waymo also used several cameras as well as radar sensors for front / back surveillance.

    LiDAR Blind Spot and Beam Gap Widening
    Because the laser beams are occluded by the vehicle itself, there is a large perception gap ("blind spot") directly in front of the vehicle. Furthermore, it can be seen that, because the laser diodes are mounted at fixed vertical angles, the gap between adjacent beams widens with increasing distance.

    5. Using A Desktop Workspace
    6. Exercise: Lidar Data in the Waymo Dataset
    7. Solution: Lidar Data in the Waymo Dataset
    8. The Structure of Frames in the Waymo Dataset
    9. Lidar Technical Properties

    LiDAR beam start and stop pulse

    Other time-of-flight methods are radar and ultrasound. Of these three ToF techniques, LiDAR provides the highest angular resolution, because of its significantly smaller beam divergence. It thus allows a better separation of adjacent objects in a scene, as illustrated in the following figure:

    Alt text
    LiDAR and radar beam divergence
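The time-of-flight principle behind all three techniques reduces to one line of arithmetic: the range is half the round-trip time multiplied by the speed of light. A quick sketch (the 0.5 µs round-trip time is an invented example value):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_time_s):
    """Range to the target: the pulse travels out and back, hence the factor 1/2."""
    return 0.5 * SPEED_OF_LIGHT * round_trip_time_s

# A pulse returning after 0.5 microseconds corresponds to roughly 75 m.
r = tof_range(0.5e-6)
```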

    10. Overview of Available LiDAR Types

    The following figure shows a typical classification of LiDAR sensors:

    Scanning LiDAR - Motorized Opto-Mechanical Scanning
    Motorized optomechanical scanners are the most common type of LiDAR scanners. In 2007, the company Velodyne, a pioneer in LiDAR technology, released a 64-beam rotating line scanner, which clearly shaped and dominated the autonomous vehicle industry in its early years. The most obvious advantages of this scanner type are its long ranging distance, wide horizontal field of view, and fast scanning speed.

    Flash LiDAR vs. line and raster scanning

    11. Lidar Selection Criteria

    Lesson 2: The Lidar Sensor

    Lesson 3: Detecting Objects in Lidar

    Mid-term Project: 3D Object Detection

    4. Localization

    5. Planning

    6. Glossary